7 research outputs found

    Tighter bounding volumes for better occlusion culling performance

    Get PDF
    Bounding volumes are used in computer graphics to approximate the actual geometric shape of an object in a scene. The main intention is to reduce the cost of visibility or interference tests. The most commonly used bounding volumes have been axis-aligned bounding boxes and bounding spheres. In this paper, we propose the use of discrete orientation polytopes (k-DOPs) as bounding volumes specifically for visibility culling. Occlusion tests are computed more accurately using k-DOPs and, more importantly, they are also computed more efficiently. We illustrate this point through a series of experiments using a wide range of data models under varying viewing conditions. Although no single bounding volume works best in every situation, k-DOPs are often the best and still perform very well in those cases where they are not, so they provide good results without requiring an analysis of the application and of different bounding volumes.
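    To make the idea concrete, the following is a minimal sketch of a k-DOP as an interval per fixed axis, here an 8-DOP with four illustrative directions; the axis set, struct, and function names are assumptions for illustration and are not taken from the paper. The overlap test is the same slab-interval test used for axis-aligned boxes, only over more axes.

```cpp
// Minimal k-DOP sketch (8-DOP: four fixed axis directions), illustrative only.
#include <array>
#include <cfloat>
#include <vector>

struct Vec3 { float x, y, z; };

// Four fixed directions: the coordinate axes plus one diagonal.
static const std::array<Vec3, 4> kAxes = {{
    {1, 0, 0}, {0, 1, 0}, {0, 0, 1}, {1, 1, 1}
}};

struct DOP8 {
    std::array<float, 4> min, max;  // slab interval per axis
};

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Build the k-DOP of a point set by projecting every vertex onto each axis.
DOP8 buildDOP(const std::vector<Vec3>& points) {
    DOP8 d;
    d.min.fill(FLT_MAX);
    d.max.fill(-FLT_MAX);
    for (const Vec3& p : points)
        for (size_t i = 0; i < kAxes.size(); ++i) {
            float t = dot(p, kAxes[i]);
            if (t < d.min[i]) d.min[i] = t;
            if (t > d.max[i]) d.max[i] = t;
        }
    return d;
}

// Two k-DOPs are disjoint as soon as one slab interval pair does not overlap.
bool overlaps(const DOP8& a, const DOP8& b) {
    for (size_t i = 0; i < kAxes.size(); ++i)
        if (a.max[i] < b.min[i] || b.max[i] < a.min[i]) return false;
    return true;
}
```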

    Efficient multiple occlusion queries for scene graph systems

    Get PDF
    Image space occlusion culling is a useful approach to reducing the rendering load of large polygonal models. Like most large-model techniques, it trades overhead costs against the rendering costs of the possibly occluded geometry. Modern graphics hardware now supports occlusion culling, but these hardware extensions incur fill-rate and latency costs. In this paper, we propose a new technique for scene graph traversal optimized for the efficient use of occlusion queries. Our approach uses several Occupancy Maps to organize the scene graph traversal. During traversal, hierarchical occlusion culling, view frustum culling, and rendering are performed. The occlusion information is determined efficiently by asynchronous multiple occlusion queries using hardware-supported query functionality. To avoid redundant results, we arrange these multiple occlusion queries according to the information in the Occupancy Maps. The presented technique is conservative and benefits from a partial depth order of the geometry.
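    The following is a minimal sketch of the general "issue many queries, read results later" pattern that asynchronous multiple occlusion queries rely on; it uses plain OpenGL query objects (OpenGL 1.5 or later, loaded e.g. via GLEW) rather than the paper's actual implementation, and drawBoundingBox()/drawNode() are placeholders for the application's own scene graph rendering calls.

```cpp
// Sketch: batch several hardware occlusion queries before reading any result,
// hiding query latency behind further CPU-side traversal work. Illustrative only.
#include <GL/glew.h>
#include <vector>

void drawBoundingBox(int node);  // placeholder: renders cheap proxy geometry
void drawNode(int node);         // placeholder: renders the actual geometry

void testAndRenderCandidates(const std::vector<int>& candidates) {
    std::vector<GLuint> queries(candidates.size());
    glGenQueries(static_cast<GLsizei>(queries.size()), queries.data());

    // Phase 1: issue all queries back to back; no readback happens yet.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_FALSE);
    for (size_t i = 0; i < candidates.size(); ++i) {
        glBeginQuery(GL_SAMPLES_PASSED, queries[i]);
        drawBoundingBox(candidates[i]);
        glEndQuery(GL_SAMPLES_PASSED);
    }
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_TRUE);

    // Phase 2: collect results; render only nodes whose proxy produced fragments.
    for (size_t i = 0; i < candidates.size(); ++i) {
        GLuint samples = 0;
        glGetQueryObjectuiv(queries[i], GL_QUERY_RESULT, &samples);
        if (samples > 0)
            drawNode(candidates[i]);  // node is at least partly visible
    }
    glDeleteQueries(static_cast<GLsizei>(queries.size()), queries.data());
}
```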

    Hardware-unterstütztes Occlusion Culling für Szenengraph-Systeme (Hardware-Supported Occlusion Culling for Scene Graph Systems)

    No full text
    This thesis describes different aspects of occlusion culling algorithms for the efficient rendering of 3D scenes with rasterization hardware. Scene graphs are used as the data structure for the scenes in order to support a wide range of applications. All presented algorithms permit modifications of the graph at runtime and are therefore suitable for dynamic scenes. The thesis consists of three parts. The first part compares different techniques for determining the visibility of a single scene graph node. All of these techniques share the characteristic that they utilize the depth information on the graphics hardware in some way. Unfortunately, each access to the hardware incurs some latency. The second part of the thesis therefore presents algorithms that reduce the number of these accesses to the graphics hardware. The algorithms take advantage of a lower-resolution screen-space representation of the scene graph nodes on the one hand, and of the information from previous occlusion queries on the other. Because all presented algorithms use the depth values of the currently rendered scene, the order of rendering and occlusion tests is important. The third part of the thesis therefore presents a novel algorithm for traversing a scene graph that efficiently utilizes hardware occlusion queries. The algorithm uses screen-space coherence in combination with a front-to-back sorted traversal of the scene graph in object space. To determine occlusion, the algorithm bundles individual occlusion tests into multiple occlusion queries, which can be performed asynchronously to reduce latency. All presented algorithms deliberately avoid special -- spatial -- data structures for the scene, in order to avoid long preprocessing times or restrictions on dynamic scenes. The algorithms also do not exploit temporal coherence between frames, because this would impose limitations on dynamic and interactive scenes. Nevertheless, the presented algorithms allow efficient rendering of scenes with high depth complexity.
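    As a rough sketch of the traversal idea from the third part, the following orders scene graph nodes front to back with a priority queue and gathers candidate leaves into batches for multiple occlusion queries; Node, distanceToCamera(), and issueOcclusionQueries() are illustrative placeholders, not the thesis's actual interfaces.

```cpp
// Sketch: front-to-back scene graph traversal that bundles occlusion tests
// into batches of multiple queries. Illustrative placeholders throughout.
#include <queue>
#include <vector>

struct Node {
    std::vector<Node*> children;
    bool isLeaf() const { return children.empty(); }
};

float distanceToCamera(const Node* n);                        // placeholder
void issueOcclusionQueries(const std::vector<Node*>& batch);  // placeholder

void traverseFrontToBack(Node* root, size_t batchSize) {
    // Nearer nodes are visited first so likely occluders are rendered before
    // the geometry they may occlude.
    auto nearer = [](Node* a, Node* b) {
        return distanceToCamera(a) > distanceToCamera(b);  // min-heap by distance
    };
    std::priority_queue<Node*, std::vector<Node*>, decltype(nearer)> open(nearer);
    open.push(root);

    std::vector<Node*> batch;
    while (!open.empty()) {
        Node* n = open.top();
        open.pop();
        if (n->isLeaf()) {
            // Bundle individual occlusion tests into one multiple query.
            batch.push_back(n);
            if (batch.size() == batchSize) {
                issueOcclusionQueries(batch);
                batch.clear();
            }
        } else {
            for (Node* c : n->children) open.push(c);
        }
    }
    if (!batch.empty()) issueOcclusionQueries(batch);
}
```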

    Efficient Multiple Occlusion Queries for Scene Graph Systems

    No full text
    Image space occlusion culling is a useful approach to reducing the rendering load of large polygonal models. Like most large-model techniques, it trades overhead costs against the rendering costs of the possibly occluded geometry. Modern graphics hardware now supports occlusion culling, but these hardware extensions incur fill-rate and latency costs.

    First experiences with a mobile platform for flexible 3D model acquisition in indoor and outdoor environments -- the Wägele

    No full text
    The efficient and comfortable acquisition of large 3D scenes is important for many current and future applications such as cultural heritage, web applications, and 3DTV, and it is therefore an active research topic. In this paper, we present a new mobile 3D model acquisition platform. The platform uses 2D laser range scanners both for self-localization by scan matching and for geometry acquisition, together with a digital panorama camera. 3D models are acquired simply by moving the platform around: geometry is acquired continuously, and color images are taken at regular intervals. After matching, the geometry is represented as an unstructured point cloud, which can then be rendered in several ways, for example using splatting with view-dependent texturing. The work presented here is still “in progress”, but we are able to present first reconstruction results for indoor and outdoor scenes.
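    As a rough illustration of how such a platform turns 2D scans into a 3D point cloud, the following sketch lifts the beams of a vertically mounted 2D scanner into world coordinates once the platform pose is known from scan matching; the Pose2D/VerticalScan layouts and the mounting geometry are assumptions for illustration, not the actual Wägele configuration.

```cpp
// Sketch: lift one vertical 2D laser scan into a 3D point cloud using the
// platform pose estimated by scan matching. Illustrative assumptions only.
#include <cmath>
#include <vector>

struct Pose2D  { double x, y, theta; };  // platform pose in the ground plane
struct Point3D { double x, y, z; };

// One vertical scan: for each beam, a range and its elevation angle.
struct VerticalScan {
    std::vector<double> ranges;  // metres
    std::vector<double> angles;  // radians, measured in the vertical plane
};

// Transform every beam of the vertical scanner into world coordinates.
std::vector<Point3D> liftScan(const VerticalScan& scan, const Pose2D& pose) {
    std::vector<Point3D> cloud;
    cloud.reserve(scan.ranges.size());
    for (size_t i = 0; i < scan.ranges.size(); ++i) {
        double r = scan.ranges[i], a = scan.angles[i];
        double forward = r * std::cos(a);  // along the platform's heading
        double up      = r * std::sin(a);  // height above the scanner
        cloud.push_back({ pose.x + forward * std::cos(pose.theta),
                          pose.y + forward * std::sin(pose.theta),
                          up });
    }
    return cloud;
}
```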

    Jupiter: a toolkit for interactive large model visualization

    No full text
    The rapidly increasing size of datasets in scientific computing, mechanical engineering, and virtual medicine is quickly exceeding the graphics capabilities of modern computers. Toolkits for large model visualization address this problem by combining efficient geometric techniques such as occlusion and visibility culling, mesh reduction, and efficient rendering. In this paper, we introduce Jupiter, a toolkit for the interactive visualization of large models that exploits the techniques mentioned above. Jupiter was originally developed by Hewlett-Packard and EAI and was recently equipped with new functionality by the University of Tübingen as part of the Kelvin project. Earlier this year, an initial version of Jupiter was also released as open source.